9 research outputs found

    Editor's Note

    Get PDF
    The International Journal of Interactive Multimedia and Artificial Intelligence - IJIMAI - provides a space in which scientists and professionals can report on new advances in Artificial Intelligence (AI). On this occasion, for the last issue of the year, I am pleased to present a regular issue comprising investigations that cover problems in AI and its use in various fields such as medicine, education, image analysis, and data protection, among others.

    Generic multi-class case-based reasoning (CBR) system to support medical diagnosis using pattern recognition techniques

    Get PDF
    Learning from experience is a process that occurs naturally in humans, and the knowledge generated by this process becomes the basis for solving everyday problems. In the field of artificial intelligence, specifically in the area of machine learning, the methodology called case-based reasoning (CBR) has arisen to emulate this ability. The core of a CBR system is the case, usually denoting a previous problem or experience that has been captured and learned and can then be reused to solve future problems. The life cycle of a CBR-based system consists of four main stages: recovery, wherein the problem is identified and past cases similar to the new case are found; adaptation, wherein a solution is suggested from the recovered cases; revision, in which the proposed solution is evaluated; and, finally, learning, wherein the system is updated to learn from experience. CBR systems have demonstrated high applicability in healthcare, specifically in medical diagnosis, where the symptoms represent the problem (new case) and the solution obtained is the recommended diagnosis. In the state of the art of CBR applied to medical diagnosis, several studies have focused mainly on improving the recovery stage. Nonetheless, there are still open issues related to case representation and multiclass problem solving. If the representation of the cases is not adequate, the results of the recovery stage cannot be expected to be optimal. In addition, most CBR systems have been designed to solve biclass problems, thereby limiting the automatic adaptation stage to two possible solutions (typically, normal or pathological); such systems are therefore unable to categorize the state of a pathology or to identify differential diagnoses. This thesis presents a generic CBR system for the identification of multiple diagnostic cases using improved recovery and adaptation stages. For this purpose, SAM (Sistema de Adaptación Mejorada, improved adaptation system) is proposed: a system that uses two classifiers in cascade to improve the classification performance for ill patients. This proposal arises from a comparative study of data representation techniques for obtaining the case vector and of different multiclass classifiers for the adaptation stage. In addition, as a significant contribution of this work, an interface is developed that presents to the specialist the probabilities that the new case belongs to each of the possible diagnoses. Experimentally, it is verified that SAM, using two cascaded K-NN classifiers together with an appropriate feature selection in the preprocessing step, produces satisfactory results in terms of classification measures while providing the specialist with intelligible results of the case recovery.
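    The cascade adaptation idea can be illustrated with a short sketch. The snippet below is a minimal, hypothetical implementation of a two-stage K-NN cascade with class-membership probabilities, assuming scikit-learn, a precomputed case-vector matrix, and binary labels {0, 1} for normal vs. pathological; it is not the SAM code itself.

```python
# Minimal sketch of a two-stage (cascade) K-NN adaptation step, in the spirit
# of the SAM proposal described above. Feature extraction, feature selection
# and the case base itself are assumed to be given; all names are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fit_cascade(X, y_binary, y_diagnosis, k=5):
    """Train stage 1 (normal vs. pathological) and stage 2 (differential diagnosis)."""
    stage1 = KNeighborsClassifier(n_neighbors=k).fit(X, y_binary)
    sick = y_binary == 1                          # pathological cases only
    stage2 = KNeighborsClassifier(n_neighbors=k).fit(X[sick], y_diagnosis[sick])
    return stage1, stage2

def predict_with_probabilities(stage1, stage2, x_new):
    """Membership probabilities of the new case, as reported to the specialist."""
    x_new = np.atleast_2d(x_new)
    p_sick = stage1.predict_proba(x_new)[0, 1]    # P(pathological), assumes labels {0, 1}
    p_diag = stage2.predict_proba(x_new)[0] * p_sick
    probs = {"normal": 1.0 - p_sick}
    probs.update({f"diagnosis_{c}": p for c, p in zip(stage2.classes_, p_diag)})
    return probs
```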

    Electromyographic Signal Processing Using Embedded Artificial Intelligence: An Adaptive Filtering Approach

    Get PDF
    In recent times, Artificial Intelligence (AI) has become ubiquitous in technological fields, mainly due to its ability to perform computations in distributed systems or the cloud. Nevertheless, for some applications, as in the case of EMG signal processing, on-the-edge processing, i.e., an embedded processing methodology, may be highly advisable or even mandatory. On the other hand, sEMG signals have traditionally been processed using linear time-invariant (LTI) techniques for computational simplicity. However, this strong assumption leads to information loss and spurious results. Given current advances in silicon technology and increasing computing power, it is now possible to process these biosignals correctly with AI-based techniques. This paper presents an embedded-processing-based adaptive filtering system (here termed edge AI) as an alternative to a sensor-computer-actuator system and to a classical digital signal processor (DSP) device. Specifically, a PYNQ-Z1 embedded system is used. For experimental purposes, three methodologies are compared on similar processing scenarios. The results show that the edge AI methodology outperforms the benchmark approaches, reducing processing time compared with classical DSPs and general-purpose standards while maintaining signal integrity and processing the signal without assuming that the EMG system is LTI. Likewise, due to the nature of the proposed architecture, information handling exhibits no leakage. The findings suggest that edge computing is suitable for EMG signal processing when on-device analysis is required.
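    The paper's exact filter and its PYNQ-Z1 implementation are not reproduced here. As a point of reference, a generic least-mean-squares (LMS) adaptive filter, a common choice for non-stationary signals such as sEMG, can be sketched as follows; function and signal names are illustrative only.

```python
# Generic LMS adaptive filter: a simple adaptive-filtering baseline for
# non-stationary sEMG signals. This is an illustrative sketch, not the
# specific filter deployed on the PYNQ-Z1 in the paper.
import numpy as np

def lms_filter(desired, reference, n_taps=16, mu=0.01):
    """Adapt FIR weights so the filter output tracks `desired` from `reference`."""
    desired = np.asarray(desired, dtype=float)
    reference = np.asarray(reference, dtype=float)
    w = np.zeros(n_taps)                          # filter weights, updated per sample
    output = np.zeros_like(desired)
    error = np.zeros_like(desired)
    for n in range(n_taps, len(desired)):
        x = reference[n - n_taps:n][::-1]         # most recent reference samples first
        output[n] = np.dot(w, x)
        error[n] = desired[n] - output[n]
        w = w + 2 * mu * error[n] * x             # LMS gradient-descent update
    return output, error
```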

    Integration of tools for the evaluation of scientific publication indicators

    Get PDF
    More and more technological tools emerge every day to help manage the editorial processes of a scientific journal, from the submission of a paper to its publication and the follow-up of citations. However, given this increase in options, it becomes more complex for those responsible to properly monitor and validate the processes, since the information is more dispersed and the tools often lack interoperability. In the present work, an architecture for integrating the minimum set of tools necessary for the editorial workflow is proposed, based on an analytical and technical comparison that identifies the best integration among them, so that all the required information is consolidated in a single platform. As a result, a first prototype of the platform applying the defined architecture is presented, with favorable results for monitoring and validating the journals' papers and indicators.
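    One way to read the proposed integration is as an adapter layer over the individual tools plus an aggregator that builds the single-platform view. The sketch below is purely illustrative: the class names, methods, and indicator fields are hypothetical and do not correspond to the tools or platform described in the paper.

```python
# Hypothetical sketch of the integration idea: each editorial tool is wrapped
# by an adapter with a common interface, and an aggregator merges their data
# into a single consolidated view. All names and values are placeholders.
from abc import ABC, abstractmethod

class SourceAdapter(ABC):
    """Common interface every integrated tool (submissions, citations, ...) exposes."""
    @abstractmethod
    def fetch_indicators(self, journal_id: str) -> dict:
        ...

class SubmissionSystemAdapter(SourceAdapter):
    def fetch_indicators(self, journal_id: str) -> dict:
        return {"submissions": 120, "accepted": 35}   # placeholder values

class CitationIndexAdapter(SourceAdapter):
    def fetch_indicators(self, journal_id: str) -> dict:
        return {"citations": 480, "h_index": 12}      # placeholder values

def consolidated_view(journal_id: str, adapters: list) -> dict:
    """Single-platform view built by merging the indicators of every tool."""
    view = {}
    for adapter in adapters:
        view.update(adapter.fetch_indicators(journal_id))
    return view

print(consolidated_view("journal-001", [SubmissionSystemAdapter(), CitationIndexAdapter()]))
```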

    Performance improvement and Lyapunov stability analysis of nonlinear systems using hybrid optimization techniques

    No full text
    Using hybrid optimization algorithms for nonlinear system analysis is a novel approach. It is a powerful technique that uses the exploitation ability of one algorithm and the exploration ability of another to find the best solution. A literature survey reveals that hybrid algorithms not only produce a high-quality response but also give faster error convergence for nonlinear systems. In this paper, a proportional-integral-derivative (PID) controller tuned with hybrid optimization techniques is used for three benchmark problems: the continuous stirred tank reactor (CSTR), the inverted pendulum, and the blood glucose system. Two recent hybrid algorithms, the particle swarm optimization-gravitational search algorithm (PSO-GSA) and the particle swarm optimization-grey wolf optimizer (PSO-GWO), are implemented to control the temperature and concentration of the CSTR, the pendulum angle of the inverted pendulum, and the glucose concentration and insulin level of the blood glucose system. In the PSO-GSA and PSO-GWO algorithms, the exploration abilities of GSA and GWO are combined with the exploitation ability of PSO. The performance of these algorithms is then compared with that of the individual PSO, GSA, and GWO algorithms, demonstrating their superiority. Stability is ensured using the Lyapunov approach, while the robustness of the systems is checked using the parameter perturbation technique. Simulation results show substantial improvement in the performance of these systems when using these meta-heuristic hybrid optimization techniques. A comparative analysis of these algorithms has also been carried out.
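    As a simplified illustration of metaheuristic PID tuning, the sketch below uses plain PSO to search the gains (Kp, Ki, Kd) that minimize the ITAE of a unit-step response for a generic first-order plant. The hybrid PSO-GSA and PSO-GWO update rules and the paper's benchmark models (CSTR, inverted pendulum, blood glucose) are deliberately not reproduced; the plant, bounds, and coefficients are assumptions.

```python
# Minimal sketch of metaheuristic PID tuning: plain PSO minimising the ITAE
# of a first-order plant's step response. Illustrative only; not the paper's
# hybrid PSO-GSA / PSO-GWO implementation or benchmark systems.
import numpy as np

def itae_cost(gains, dt=0.01, steps=1500, tau=1.0):
    """ITAE of the unit-step response of a first-order plant under PID control."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for n in range(steps):
        err = 1.0 - y                              # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv     # PID control law
        y += dt * (-y + u) / tau                   # plant: tau * dy/dt = -y + u
        if not np.isfinite(y) or abs(y) > 1e6:
            return np.inf                          # penalise unstable gain sets
        cost += (n * dt) * abs(err) * dt           # ITAE: integral of t * |e(t)|
        prev_err = err
    return cost

def pso_tune_pid(n_particles=20, iters=40, seed=0):
    """Plain PSO over (Kp, Ki, Kd); the hybrid update rules are omitted here."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([0.0, 0.0, 0.0]), np.array([10.0, 10.0, 1.0])
    pos = rng.uniform(lo, hi, (n_particles, 3))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([itae_cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([itae_cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

if __name__ == "__main__":
    gains, cost = pso_tune_pid()
    print("Tuned (Kp, Ki, Kd):", gains, "ITAE:", cost)
```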

    Automated segmentation of leukocyte from hematological images—a study using various CNN schemes

    No full text
    Medical images play a fundamental role in disease screening, and automated evaluation of these images is widely preferred in hospitals. Recently, Convolutional Neural Network (CNN)-supported medical data assessment has been widely adopted to inspect a range of medical imaging modalities. Extraction of the leukocyte section from a thin blood smear image is one of the essential procedures during the preliminary disease screening process. Conventional segmentation needs complex or hybrid procedures to extract the necessary section, and such methods sometimes yield poor results. Hence, this research aims to implement CNN-assisted image segmentation schemes to extract the leukocyte section from RGB hematological images. The proposed work employs various CNN-based segmentation schemes, such as SegNet, U-Net, and VGG-UNet, using images from the Leukocyte Images for Segmentation and Classification (LISC) database. Five classes of leukocytes are considered, and each CNN segmentation scheme is implemented separately and evaluated against the ground-truth images. The experimental outcome confirms that the overall results accomplished with VGG-UNet (Jaccard index = 91.5124%, Dice coefficient = 94.4080%, and accuracy = 97.7316%) are better than those of the SegNet and U-Net schemes. Finally, the merit of the proposed scheme is also confirmed using other similar image datasets, such as the Blood Cell Count and Detection (BCCD) database and ALL-IDB2. The attained results confirm that the proposed scheme works well on hematological images and offers better performance measures.
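    The overlap measures quoted above follow their standard definitions: the Jaccard index is |A ∩ B| / |A ∪ B| and the Dice coefficient is 2|A ∩ B| / (|A| + |B|) for a predicted mask A and ground-truth mask B. A minimal sketch of their computation (not the paper's evaluation code) is shown below.

```python
# Sketch of the Jaccard index and Dice coefficient for a predicted leukocyte
# mask against its ground truth. Standard definitions; illustrative only.
import numpy as np

def jaccard_and_dice(pred_mask, gt_mask):
    """Overlap between predicted and ground-truth masks (foreground = True)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    jaccard = intersection / union if union else 1.0
    dice = 2.0 * intersection / total if total else 1.0
    return jaccard, dice

# Toy example: two 4x4 masks that mostly agree.
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 1, 1, 1]] * 4)
print(jaccard_and_dice(pred, gt))   # (0.666..., 0.8)
```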

    Kernel-based framework for spectral dimensionality reduction and clustering formulation: A theoretical study

    Get PDF
    This work outlines a unified formulation to represent spectral approaches for both dimensionality reduction and clustering. The proposed formulation starts with a generic latent variable model in terms of the projected input data matrix. In particular, such a projection maps the data onto an unknown high-dimensional space. Regarding this model, a generalized optimization problem is stated using quadratic formulations and a least-squares support vector machine. The solution of the optimization is addressed through a primal-dual scheme. Once the latent variables and parameters are determined, the resultant model outputs a versatile projected matrix able to represent data in a low-dimensional space as well as to provide information about clusters. In particular, the proposed formulation yields solutions for kernel spectral clustering and weighted-kernel principal component analysis.
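    The primal-dual LS-SVM formulation itself is not reproduced here. As a point of reference, the two special cases it recovers, kernel spectral clustering and (weighted-)kernel PCA, can be run with off-the-shelf estimators on toy data, as in the sketch below; the dataset and kernel parameters are illustrative assumptions.

```python
# Off-the-shelf analogues of the two special cases recovered by the unified
# formulation: kernel PCA (dimensionality reduction) and kernel-based spectral
# clustering. This does not implement the paper's primal-dual LS-SVM model.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.cluster import SpectralClustering

X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# Low-dimensional projection through an RBF kernel (kernel PCA analogue).
embedding = KernelPCA(n_components=2, kernel="rbf", gamma=15.0).fit_transform(X)

# Cluster assignments from the same kernel (kernel spectral clustering analogue).
labels = SpectralClustering(n_clusters=2, affinity="rbf", gamma=15.0,
                            random_state=0).fit_predict(X)
print(embedding.shape, np.bincount(labels))
```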

    Integration of tools for the evaluation of scientific publication indicators

    No full text
    More and more technological tools emerge every day to help manage the editorial processes of a scientific journal, from the submission of a paper to its publication and the follow-up of citations. However, given this increase in options, it becomes more complex for those responsible to properly monitor and validate the processes, since the information is more dispersed and the tools often lack interoperability. In the present work, an architecture for integrating the minimum set of tools necessary for the editorial workflow is proposed, based on an analytical and technical comparison that identifies the best integration among them, so that all the required information is consolidated in a single platform. As a result, a first prototype of the platform applying the defined architecture is presented, with favorable results for monitoring and validating the journals' papers and indicators.

    Semiautomatic Grading of Short Texts for Open Answers in Higher Education

    No full text
    Grading student activities in online courses is a time-consuming task, especially when the course has a large number of students. To avoid a bottleneck in the continuous evaluation process, quizzes with multiple-choice questions are frequently used. However, such quizzes fail to provide formative feedback to the student. This work presents PLeNTaS, a system for the automatic grading of short answers in open domains that reduces the time required for grading and offers formative feedback to the students. It analyzes the text at three different levels: orthography, syntax, and semantics. The validation of the system will consider the correlation between the assigned grade and the human grade, the usefulness of the automatically generated feedback, and the pedagogical impact of using the system in the course.
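    The three-level idea can be illustrated with a short sketch that combines per-level scores into a grade, with the semantic level approximated by TF-IDF cosine similarity against a reference answer. The weights, function names, and similarity measure are hypothetical; this is not the PLeNTaS implementation.

```python
# Illustrative three-level score combination (orthography, syntax, semantics).
# The semantic score is approximated by TF-IDF cosine similarity to a reference
# answer; the real system's levels and weights are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def grade_answer(student_answer, reference_answer,
                 spelling_score, syntax_score, weights=(0.2, 0.2, 0.6)):
    """spelling_score and syntax_score in [0, 1] are assumed to come from upstream checkers."""
    tfidf = TfidfVectorizer().fit([reference_answer, student_answer])
    vectors = tfidf.transform([reference_answer, student_answer])
    semantic_score = float(cosine_similarity(vectors[0], vectors[1])[0, 0])
    w_orth, w_syn, w_sem = weights
    return w_orth * spelling_score + w_syn * syntax_score + w_sem * semantic_score

print(grade_answer("A stack is a LIFO data structure.",
                   "A stack stores elements in last-in, first-out order.",
                   spelling_score=1.0, syntax_score=0.9))
```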